klotz: prompt engineering

  1. - Prompt engineering is the practice of experimenting with changes to prompts and observing how they affect what large language models (LLMs) generate. A few basic techniques yield better outcomes from LLMs (see the sketch after this list):
    - Zero-shot prompting gives an LLM a task via the prompt alone, with no examples of the desired input-output behavior for that task
    - For many language tasks reported in the literature, performance improves when the prompt includes a few examples; this is known as few-shot prompting
    - Chain-of-Thought (CoT) prompting breaks a multi-step problem into intermediate reasoning steps, letting LLMs tackle complex reasoning that zero-shot or few-shot prompting alone cannot solve
    - Building on CoT, self-consistency prompting is an advanced technique that samples multiple, diverse reasoning paths from the LLM and then selects the most consistent answer among the generated responses
    2024-01-20, by klotz
  2. An in-depth guide to Mistral 7B, a 7-billion-parameter language model released by Mistral AI. The guide covers an introduction to the model, its capabilities, code generation, limitations, and guardrails (including how to enforce them), along with applications, papers, and further reading on Mistral 7B and its finetuned variants.
  3. How computationally optimized prompts make language models excel, and how this all affects prompt engineering
  4. This tutorial introduces promptrefiner, a tool created by Amirarsalan Rajabi that uses the GPT-4 model to craft polished system prompts for local LLMs (a sketch of the general pattern follows the code example below).
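
The following sketch illustrates the few-shot, Chain-of-Thought, and self-consistency techniques summarized in bookmark 1: it builds a prompt with worked examples that show intermediate reasoning, samples several diverse completions, and majority-votes the extracted final answers. It is a minimal sketch assuming the OpenAI Python client (v1+) with an OPENAI_API_KEY in the environment; the model name, prompt wording, and answer-extraction regex are illustrative choices, not taken from the bookmarked articles.

```python
# Minimal sketch of few-shot Chain-of-Thought + self-consistency prompting.
# Assumes the OpenAI Python client (>=1.0) and an OPENAI_API_KEY in the
# environment; the model name is an assumption, not prescribed by the bookmarks.
import re
from collections import Counter

from openai import OpenAI

client = OpenAI()

# Few-shot CoT demonstrations: each example shows intermediate reasoning
# steps before stating the final answer.
FEW_SHOT = """Q: A pen costs 2 dollars and a notebook costs 3 dollars. How much do 2 pens and 1 notebook cost?
A: 2 pens cost 2 * 2 = 4 dollars. 1 notebook costs 3 dollars. Total = 4 + 3 = 7. The answer is 7.

Q: There are 15 apples and 6 are eaten. How many remain?
A: 15 - 6 = 9 apples remain. The answer is 9.
"""

def self_consistent_answer(question: str, samples: int = 5) -> str:
    """Sample several diverse CoT completions and return the majority answer."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",          # assumption: any chat model would do
        messages=[{"role": "user", "content": FEW_SHOT + f"\nQ: {question}\nA:"}],
        temperature=0.8,              # higher temperature -> diverse reasoning paths
        n=samples,
    )
    answers = []
    for choice in resp.choices:
        # Extract the final answer after the "The answer is" marker.
        match = re.search(r"The answer is\s*([^.\n]+)", choice.message.content or "")
        if match:
            answers.append(match.group(1).strip())
    # Self-consistency: keep the answer that the most reasoning paths agree on.
    return Counter(answers).most_common(1)[0][0] if answers else ""

if __name__ == "__main__":
    print(self_consistent_answer("A train travels 60 km per hour for 3 hours. How far does it go?"))
```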
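
Bookmark 4 describes promptrefiner, which uses GPT-4 to craft system prompts for local LLMs. The sketch below shows only the general meta-prompting pattern that idea rests on (ask a stronger model to rewrite a draft system prompt); it is not promptrefiner's actual API, and the client usage, refiner instructions, and model name are assumptions for illustration.

```python
# Sketch of the prompt-refinement pattern: a stronger model rewrites a draft
# system prompt intended for a smaller local LLM. Illustrative only; this is
# not promptrefiner's API.
from openai import OpenAI

client = OpenAI()

REFINER_INSTRUCTIONS = (
    "You improve system prompts for small local language models. "
    "Rewrite the draft below to be specific, unambiguous, and concise, "
    "and return only the improved system prompt."
)

def refine_system_prompt(draft: str) -> str:
    """Ask a stronger model to tighten a system prompt for a local LLM."""
    resp = client.chat.completions.create(
        model="gpt-4",  # the bookmarked tutorial uses GPT-4 as the refiner
        messages=[
            {"role": "system", "content": REFINER_INSTRUCTIONS},
            {"role": "user", "content": draft},
        ],
    )
    return (resp.choices[0].message.content or "").strip()

if __name__ == "__main__":
    print(refine_system_prompt("You are a helpful assistant that summarizes legal text."))
```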
